
    Transient Load Recovery Using Strain Measurements and Model Reduction

    A transient load is defined as a loading condition whose magnitude changes rapidly over a short period of time; impact loads are a common example. It is well known that impact loads can have a disastrous effect on a structure compared to loads applied over a longer period, so identifying impact loads is an important aspect of structural design. Direct measurement of the applied force using force transducers is not possible in all situations; in such cases, the structural response can be used to recover the imposed loading. Various structural responses, such as displacement, stress, or strain, could be used to recover the imposed loads; this thesis focuses on the use of strain data to recover impact loads acting on a component. The strain data are obtained by placing strain gages at different locations on the component. Selecting the strain gage locations is nontrivial because the accuracy of the load recovery is sensitive to sensor position, so a D-optimal technique is used in this thesis to determine the sensor locations that yield the most accurate load estimates. With sensors placed at the optimal locations, the measured strain data are used in conjunction with the component's modal data to approximate mode participation factors. The approximated mode participation factors and displacement mode shapes are then used to approximate displacements, velocities, and accelerations, and finally this information is used to estimate the loads acting on the component. A drawback of this approach is that it requires modal information for the entire structure to be available; practical computational considerations, however, limit the use of information from all modes. To overcome this difficulty, reduced-order modeling based on Craig-Bampton model reduction is used. It is seen that Craig-Bampton reduction allows for an accurate estimation of the imposed loads while utilizing only a small subset of the available modal information.
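The strain-to-participation-factor step described above can be sketched as a small least-squares problem. The mode-shape matrix and values below are entirely hypothetical (not the thesis's data): given strain mode shapes Φ_ε evaluated at the gage locations, measured strains ε ≈ Φ_ε q are inverted for the participation factors q, from which displacements would follow via the displacement mode shapes.

```python
import numpy as np

# Hypothetical strain mode-shape matrix: 6 gage locations x 3 retained modes.
# In practice these columns would come from the component's (reduced) modal model.
Phi_eps = np.array([
    [ 1.0,  0.5,  0.2],
    [ 0.8, -0.3,  0.4],
    [ 0.6,  0.7, -0.5],
    [ 0.4, -0.6,  0.3],
    [ 0.2,  0.4,  0.6],
    [ 0.1, -0.2, -0.4],
])

# Synthetic "measured" strains generated from known participation factors,
# standing in for real gage readings.
q_true = np.array([2.0, -1.0, 0.5])
eps_measured = Phi_eps @ q_true

# Least-squares estimate of the mode participation factors from the strains.
q_est, *_ = np.linalg.lstsq(Phi_eps, eps_measured, rcond=None)

# Displacements would then follow as u = Phi_u @ q_est, with Phi_u the
# displacement mode shapes, and loads from the resulting kinematics.
print(np.round(q_est, 6))
```

Because the system is overdetermined (more gages than retained modes), the least-squares solve also averages out measurement noise; the D-optimal placement mentioned above conditions Φ_ε so this inversion stays accurate.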

    Managing Overheads in Asynchronous Many-Task Runtime Systems

    Asynchronous Many-Task (AMT) runtime systems are based on the idea of dividing an algorithm into small units of work, known as tasks. The runtime system is then responsible for scheduling and executing these tasks efficiently, taking into account the resources provided to it and the data dependencies between the tasks. One of the primary challenges faced by AMTs is managing such fine-grained parallelism and the overheads associated with creating, scheduling, and executing tasks. This work develops methodologies for assessing and managing the overheads of fine-grained task execution in HPX, our exemplar Asynchronous Many-Task runtime system. Known optimization techniques, viz. active message coalescing, task inlining, and parallel loop iteration chunking, are applied to HPX. Active message coalescing, where messages bound for the same destination are aggregated into a single message, is presented as a solution for minimizing the overheads of fine-grained communication. Methodologies and metrics for analyzing fine-grained communication overheads are developed. The metrics identified and implemented in this research aid in evaluating network efficiency by giving an intrinsic view of the underlying network overhead that would be difficult to obtain with conventional methods. Task inlining, a method that allows runtime systems to manage the overheads introduced by a large number of tasks by merging tasks into one thread of execution, is presented as a technique for minimizing fine-grained task overheads. A runtime policy that dynamically decides whether to inline a task is developed and evaluated on different processor architectures. A methodology for deriving a largely machine-independent constant that controls task granularity is developed. Finally, the machine-independent constant derived in the context of task inlining is applied to the chunking of parallel loop iterations, confirming its applicability for reducing overheads when determining the optimal chunk size of the combined loop iterations.
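The granularity-control idea behind chunking can be sketched as follows. The function, its names, and the constant value are illustrative only, not HPX's actual API or the constant derived in the thesis: a minimum useful per-task running time plays the role of the machine-independent constant, and the chunk size is chosen so that each chunked task amortizes its creation and scheduling overhead.

```python
def chunk_size(n_iterations, t_iteration_s, t_min_task_s=1e-4, n_workers=4):
    """Pick a chunk size so each chunked task runs for at least t_min_task_s.

    t_min_task_s stands in for the (largely machine-independent) granularity
    constant described above; the default value here is purely illustrative.
    """
    # Smallest chunk whose total work amortizes the per-task overhead.
    min_chunk = max(1, int(t_min_task_s / t_iteration_s))
    # Do not create fewer chunks than workers, or cores would sit idle.
    max_chunk = max(1, n_iterations // n_workers)
    return min(min_chunk, max_chunk)

# 1e6 iterations at 50 ns each: chunks of 2000 iterations (~100 us per task).
print(chunk_size(1_000_000, 50e-9))
```

Task inlining follows the same logic in reverse: if a task's expected running time falls below the constant, the runtime executes it directly on the current thread instead of paying the cost of spawning it.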

    Asynchronous Execution of Python Code on Task Based Runtime Systems

    Despite advancements in the areas of parallel and distributed computing, the complexity of programming on High Performance Computing (HPC) resources has deterred many domain experts, especially in the areas of machine learning and artificial intelligence (AI), from utilizing the performance benefits of such systems. Researchers and scientists favor high-productivity languages to avoid the inconvenience of programming in low-level languages and the cost of acquiring the skills required for programming at that level. In recent years, Python, with the support of linear algebra libraries like NumPy, has gained popularity despite limitations that prevent such code from running in a distributed setting. Here we present a solution that maintains both high-level programming abstractions and parallel and distributed efficiency. Phylanx is an asynchronous array processing toolkit that transforms Python and NumPy operations into code that can be executed in parallel on HPC resources, by mapping Python and NumPy functions and variables into a dependency tree executed by HPX, a general-purpose, parallel, task-based runtime system written in C++. Phylanx additionally provides introspection and visualization capabilities for debugging and performance analysis. We have tested the foundations of our approach by comparing our implementation of widely used machine learning algorithms against accepted NumPy standards.
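The mapping from NumPy operations to a dependency tree can be caricatured with a toy lazy-expression sketch. This is not Phylanx's actual implementation, only an illustration of the idea: overloaded operators build nodes that record their dependencies instead of computing immediately, and evaluation walks the tree, which is exactly where a task-based runtime like HPX could execute independent subtrees as parallel tasks.

```python
import numpy as np

class Node:
    """Toy lazy-expression node: records an operation and its dependencies."""
    def __init__(self, op, *deps):
        self.op, self.deps = op, deps

    def __add__(self, other):
        return Node(np.add, self, other)

    def __matmul__(self, other):
        return Node(np.matmul, self, other)

    def eval(self):
        # A task-based runtime could evaluate independent deps as parallel
        # tasks; this sketch simply recurses sequentially.
        args = [d.eval() if isinstance(d, Node) else d for d in self.deps]
        return self.op(*args)

def const(x):
    return Node(lambda: np.asarray(x))

a = const([[1.0, 2.0], [3.0, 4.0]])
b = const([[1.0, 0.0], [0.0, 1.0]])
expr = (a @ b) + a   # builds a dependency tree; nothing is computed yet
print(expr.eval())   # walking the tree triggers the actual NumPy operations
```

Deferring execution this way is what lets a system like Phylanx see the whole expression before running it, so it can distribute the work and overlap independent computations.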